The term Support Vector Machines (SVMs) is sometimes used loosely to refer to three methods, each an extension of the previous method1: the maximal margin classifier, the support vector classifier, and the support vector machine itself.
SVMs are a common supervised discriminative algorithm, well suited to complex small- to medium-sized datasets2.
They can be used for both classification and regression.
To demonstrate SVMs, I'll be using Fisher's (or Anderson's) Iris flower dataset3.
The dataset consists of 50 samples each from three species of Iris (Iris setosa, Iris virginica and Iris versicolor).
Figure 1: Iris Flowers
Five attributes were collected for the 150 records:
- sepal length (cm)
- sepal width (cm)
- petal length (cm)
- petal width (cm)
- species (the target)
| | Sepal length (cm) | Sepal width (cm) | Petal length (cm) | Petal width (cm) | Species |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | Setosa |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | Setosa |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | Setosa |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | Setosa |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | Setosa |
Figure 2: Iris Attributes
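The table above can be reproduced directly; a minimal sketch assuming scikit-learn's bundled copy of the dataset (the `as_frame` option requires scikit-learn 0.23 or later) and pandas:

```python
import pandas as pd
from sklearn.datasets import load_iris

# Load the Iris dataset as a pandas DataFrame
iris = load_iris(as_frame=True)
df = iris.frame

# Map the integer target to the species names for readability
df["species"] = df["target"].map(dict(enumerate(iris.target_names)))

# First five rows, matching Figure 2
print(df.drop(columns="target").head())
```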
In $P$-dimensional space, a hyperplane is a flat affine subspace of dimension $P-1$.
Examples
In more detail1:
Two-dimensions: $\beta_0 + \beta_1X_1 + \beta_2X_2 = 0$
Three-dimensions: $\beta_0 + \beta_1X_1 + \beta_2X_2 + \beta_3X_3 = 0$
P-dimensions: $\beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_pX_p = 0$
If $X = (X_1, ..., X_p)^T$ satisfies the equation above, then it is a point on the hyperplane.
If $\beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_pX_p > 0$, the point lies on one side of the hyperplane,
whereas if $\beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_pX_p < 0$, it lies on the other side.
We aim to classify an $n \times p$ matrix of $n$ observations in $p$ dimensional space, with these observations falling into two classes $y_1,...,y_n \in \{-1,1\}$.
If we were to perfectly separate the classes, the hyperplane would have the property that1:
$\beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + ... + \beta_pX_{ip} > 0 \quad \text{if} \ y_i = 1$,
$\beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + ... + \beta_pX_{ip} < 0 \quad \text{if} \ y_i = -1$.
For a new test observation $x^*$, we would look at the sign of:
$$f(x^*) = \beta_0 + \beta_1x_1^* + \beta_2x_2^* + ... + \beta_px_p^*.$$We would assign it to class 1 if $f(x^*)$ is positive and to class -1 if it is negative.
Furthermore, we could use the magnitude of $f(x^*)$ to indicate how far the point lies from the hyperplane.
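As a small numerical illustration (the coefficients and the test point below are made up, not fitted from data), classifying a new observation is just a sign check:

```python
import numpy as np

# Hypothetical hyperplane coefficients: beta_0 (intercept) and beta_1..beta_p
beta_0 = -1.0
beta = np.array([0.8, -0.5, 1.2, 0.3])

# A new test observation x* with p = 4 features
x_star = np.array([5.1, 3.5, 1.4, 0.2])

f = beta_0 + beta @ x_star          # f(x*) = beta_0 + beta_1*x1 + ... + beta_p*xp
label = 1 if f > 0 else -1          # assign class by the sign of f(x*)
print(f, label)                     # the magnitude of f hints at distance from the hyperplane
```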
We need a reasonable way of constructing a hyperplane out of the many possible choices.
The maximal margin approach looks for the separating hyperplane that is farthest from the training observations: we compute the perpendicular distance from each training observation to a given separating hyperplane, and the smallest of these distances is called the margin.
The maximal margin hyperplane is the separating hyperplane for which the margin is largest.
We hope the classifier with a large margin on the training data will generalise well to unseen test observations.
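To make the margin concrete, here is a minimal sketch with made-up 2-D data and an arbitrary (not optimised) candidate hyperplane: the perpendicular distance from a point $x_i$ to the hyperplane $\beta_0 + \beta^Tx = 0$ is $|\beta_0 + \beta^Tx_i| / ||\beta||$, and the margin is the smallest such distance.

```python
import numpy as np

# Toy 2-D training points (made-up data for illustration)
X = np.array([[1.0, 1.0], [2.0, 0.5], [4.0, 4.0], [5.0, 3.5]])

# An arbitrary candidate separating hyperplane beta_0 + beta^T x = 0
beta_0 = -7.0
beta = np.array([1.0, 1.0])

# Perpendicular distance of each point to the hyperplane: |beta_0 + beta . x| / ||beta||
distances = np.abs(beta_0 + X @ beta) / np.linalg.norm(beta)

# The margin of this candidate hyperplane is the smallest such distance
print(distances, distances.min())
```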
Another way of defining our hyperplane is:
$$\mathbf{w}^T\mathbf{x} + b = 0.$$With labels $y_i \in \{-1,1\}$, the classification conditions become
$$\mathbf{w}^T\mathbf{x}_i + b \geq 0 \text{ for } y_i = +1,$$$$\mathbf{w}^T\mathbf{x}_i + b < 0 \text{ for } y_i = -1.$$We can see below that there are two points equidistant from the maximal margin hyperplane, lying on the dashed lines. These observations are called Support Vectors.
We can call these two points $x_1$ and $x_2$, as below:
$$\mathbf{w}^Tx_1 + b = 1,$$$$\mathbf{w}^Tx_2 + b = -1.$$If we move our support vectors, our hyperplane will move too.
This is because the maximal margin hyperplane depends only on these support vectors.
Other data points could be moved without the hyperplane moving.
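This can be checked empirically. The sketch below (assuming scikit-learn, made-up data, and a very large C to approximate the hard margin) fits a linear SVM, moves a point that is not a support vector, refits, and finds the hyperplane essentially unchanged:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (made-up data)
X = np.array([[1, 1], [2, 0.5], [1.5, 1.5], [4, 4], [5, 3.5], [4.5, 4.5]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates the hard margin
print("support vector indices:", clf.support_)
print("w, b:", clf.coef_, clf.intercept_)

# Move a point that is NOT a support vector further from the boundary and refit
X2 = X.copy()
non_sv = [i for i in range(len(X)) if i not in clf.support_][0]
X2[non_sv] = X2[non_sv] + y[non_sv] * 2.0     # push it away from the hyperplane
clf2 = SVC(kernel="linear", C=1e6).fit(X2, y)
print("w, b after moving a non-support vector:", clf2.coef_, clf2.intercept_)
```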
We want to maximise the distance between the margin lines, on which the points lie4.
Subtracting the second support vector equation from the first eliminates $b$:
$$(\mathbf{w}^Tx_1 + b) - (\mathbf{w}^Tx_2 + b) = 1 - (-1),$$
$$\mathbf{w}^T(x_1 - x_2) = 2.$$
Dividing both sides by $||\mathbf{w}||$ projects $x_1 - x_2$ onto the unit normal of the hyperplane, which gives the width of the margin:
$$
\frac{\mathbf{w}^T}{||\mathbf{w}||}(x_1 - x_2) = \frac{2}{||\mathbf{w}||}
$$
So we want to maximise $\frac{2}{||\mathbf{w}||}$ while classifying everything correctly, i.e. $y_i(\mathbf{w}^T\mathbf{x}_i+b) \geq 1 \quad \forall_i$.
As our two margin lines are parallel and no training points fall between them, we can find the pair of hyperplanes which gives the maximum margin by minimising $||\mathbf{w}||^2$. So now we have5:
$\min_{\mathbf{w}, b}\frac{1}{2}||\mathbf{w}||^2 \quad \text{s.t.} \quad y_i(\mathbf{w}^T\mathbf{x}_i+b) \geq 1 \quad \forall_i$,
which is easier to handle because it is a convex quadratic optimisation problem, efficiently solvable using quadratic programming algorithms.
This requires a Lagrangian formulation of the problem, so we introduce Lagrange multipliers $\alpha_i$, $i = 1, \ldots , l$, where $l$ is the number of training points:
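As a sanity check, the primal problem can be handed to a general-purpose constrained optimiser. A minimal sketch using SciPy's SLSQP on made-up 2-D data (a real SVM implementation would use a dedicated quadratic programming routine):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up, linearly separable 2-D data
X = np.array([[1.0, 1.0], [2.0, 0.5], [4.0, 4.0], [5.0, 3.5]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

# Variables are v = (w1, w2, b); minimise 0.5 * ||w||^2
def objective(v):
    w = v[:2]
    return 0.5 * w @ w

# Constraints: y_i (w . x_i + b) - 1 >= 0 for every training point
constraints = [{"type": "ineq",
                "fun": lambda v, i=i: y[i] * (v[:2] @ X[i] + v[2]) - 1.0}
               for i in range(len(X))]

res = minimize(objective, x0=np.array([1.0, 1.0, -5.0]),
               method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b)   # coefficients of the maximal margin hyperplane
```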
$$ \min L_P = \frac{1}{2}||\mathbf{w}||^2 - \sum^l_{i=1} \alpha_iy_i(\mathbf{x}_i\cdot\mathbf{w}+b) + \sum^l_{i=1} \alpha_i \qquad \text{s.t.} \quad \forall_i \alpha_i \geq 0 $$Setting the derivatives of $L_P$ with respect to $\mathbf{w}$ and $b$ to zero and substituting back removes the dependence on $\mathbf{w}$ and $b$, leaving the dual problem:
$$ \max L_D = \sum_i\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_jy_iy_j\mathbf{x}_i\cdot \mathbf{x}_j \qquad \text{s.t.} \quad \forall_i \alpha_i \geq 0. $$To achieve this we also need a constraint (it comes from setting $\partial L_P/\partial b = 0$) that allows us to solve for the $\alpha_i$,
$$\sum_i\alpha_iy_i = 0.$$Knowing our $\alpha_i$ means we can find the weights, which are a linear combination of the training inputs, $\mathbf{x}_i$, training outputs, $y_i$, and the values of $\alpha$,
$$\mathbf{w} = \sum^{N_S}_{i=1}\alpha_iy_i\mathbf{x}_i,$$where $N_S$ is the number of support vectors6.
$b$ is then implicitly determined: choose any $i$ for which $\alpha_i \neq 0$ and compute $b$ from the following Karush-Kuhn-Tucker (KKT) condition: $$ \alpha_i(y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1) = 0 \quad \forall i $$
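To make the dual concrete, here is a small sketch (SciPy again, made-up 2-D data) that maximises $L_D$ subject to $\alpha_i \geq 0$ and $\sum_i\alpha_iy_i = 0$, then recovers $\mathbf{w}$ and $b$ from the non-zero $\alpha_i$ exactly as above:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up, linearly separable 2-D data
X = np.array([[1.0, 1.0], [2.0, 0.5], [4.0, 4.0], [5.0, 3.5]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

# Q_ij = y_i y_j x_i . x_j, so L_D = sum(alpha) - 0.5 * alpha^T Q alpha
Q = (y[:, None] * X) @ (y[:, None] * X).T

def neg_dual(a):                     # minimise the negative of L_D
    return 0.5 * a @ Q @ a - a.sum()

res = minimize(neg_dual, x0=np.zeros(len(X)), method="SLSQP",
               bounds=[(0, None)] * len(X),                          # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # sum alpha_i y_i = 0

alpha = res.x
sv = alpha > 1e-6                     # support vectors have non-zero alpha_i
w = (alpha * y) @ X                   # w = sum_i alpha_i y_i x_i
b = np.mean(y[sv] - X[sv] @ w)        # from the KKT condition on the support vectors
print(alpha, w, b)
```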
Question: But why bother doing this? That was a lot of effort, why not just solve the original problem?
Answer: Because this lets us solve the problem by computing just the inner products $\mathbf{x}_i \cdot \mathbf{x}_j$, which will be important when we want to solve non-linearly separable classification problems.
Case 1: Two feature vectors $x_i$, $x_j$ are completely dissimilar (orthogonal), so their dot product is 0. They don't contribute to $L_D$.
Case 2: Two feature vectors $x_i$, $x_j$ are similar and predict the same output value $y_i$ (i.e. both $+1$ or both $-1$). This means $y_iy_j = 1$ and the term $\alpha_i\alpha_jy_iy_j\mathbf{x}_i\cdot\mathbf{x}_j$ is positive. Because it is subtracted from the first sum, $\sum^l_{i=1}\alpha_i$, it decreases the value of $L_D$, so the algorithm downgrades similar feature vectors that make the same prediction.
Case 3: Two feature vectors $x_i$, $x_j$ make opposite predictions about the output value (i.e. one $+1$ and the other $-1$) but are otherwise similar. The term $\alpha_i\alpha_jy_iy_j\mathbf{x}_i\cdot\mathbf{x}_j$ is negative and, since we are subtracting it, it adds to the sum. These are the examples we are looking for, as they are critical for telling the two classes apart.
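A quick numerical illustration of the three cases, with made-up feature vectors and $\alpha_i = \alpha_j = 1$ purely for the arithmetic:

```python
import numpy as np

def pair_term(x_i, x_j, y_i, y_j, a_i=1.0, a_j=1.0):
    """The alpha_i alpha_j y_i y_j x_i . x_j term that is subtracted in L_D."""
    return a_i * a_j * y_i * y_j * np.dot(x_i, x_j)

print(pair_term(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 1, 1))   # Case 1: orthogonal -> 0
print(pair_term(np.array([1.0, 1.0]), np.array([1.0, 0.9]), 1, 1))   # Case 2: similar, same class -> positive (lowers L_D)
print(pair_term(np.array([1.0, 1.0]), np.array([1.0, 0.9]), 1, -1))  # Case 3: similar, opposite classes -> negative (raises L_D)
```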
Once we have a trained Support Vector Machine, any new test data has its class determined by which side of the decision boundary the observation lands on.
We take the class of $\mathbf{x}^*$ to be $\text{sgn}(\mathbf{w} \cdot \mathbf{x}^* + b)$.
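For a linear kernel, scikit-learn exposes $\mathbf{w}$ and $b$ as `coef_` and `intercept_`, so the sign rule above reproduces the library's own predictions; a minimal sketch on made-up data:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 0.5], [4.0, 4.0], [5.0, 3.5]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x_new = np.array([3.0, 1.0])                     # a new test observation
print(int(np.sign(w @ x_new + b)))               # class from sgn(w . x* + b)
print(clf.predict(x_new.reshape(1, -1))[0])      # matches the library's prediction
```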
Maximal Margin Classifiers are sensitive to outliers.
In other cases, no exact linear separating hyperplane exists. We may then want a hyperplane that almost separates the two classes, allowing some errors; this soft-margin approach gives the Support Vector Classifier.
Furthermore, if we have a large number of features, this approach often leads to overfitting.
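In scikit-learn the softness of the margin is controlled by the `C` parameter: a small `C` widens the margin and tolerates more violations, while a large `C` approaches the hard margin. A minimal sketch on the two Iris species that overlap (versicolor and virginica):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

# Keep only versicolor (1) and virginica (2), which are not linearly separable
X, y = load_iris(return_X_y=True)
mask = y != 0
X, y = X[mask], np.where(y[mask] == 1, -1, 1)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    errors = (clf.predict(X) != y).sum()
    print(f"C={C}: {len(clf.support_)} support vectors, {errors} training errors")
```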
Now might be a good time to try exercises 1-3.